Hemispheric Asymmetries in Striatal Reward Responses Relate to Approach-Avoidance Learning and Encoding of Positive-Negative Prediction Errors in Dopaminergic Midbrain Regions.

Authors

  • Kristoffer Carl Aberg
  • Kimberly C Doell
  • Sophie Schwartz
Abstract

Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach and avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world.
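To make the computational framing above concrete, the following is a minimal sketch of a reinforcement-learning update with separate learning rates for positive and negative prediction errors, which is the standard way such approach-avoidance biases are modeled. It is not the authors' actual model; the function name, parameter values, and single-action task structure are illustrative assumptions.

```python
import numpy as np

def simulate_learner(outcomes, alpha_pos=0.3, alpha_neg=0.3):
    """Value learning with asymmetric learning rates for positive vs negative
    prediction errors (PEs). alpha_pos > alpha_neg biases learning toward
    rewards (approach); alpha_neg > alpha_pos biases it toward punishments
    (avoidance). Illustrative sketch only; parameters are assumptions."""
    q = 0.0                     # current value estimate of a single action
    q_history = []
    for r in outcomes:          # r: observed outcome (reward or punishment)
        pe = r - q              # prediction error: better (+) or worse (-) than expected
        alpha = alpha_pos if pe > 0 else alpha_neg
        q += alpha * pe         # update scaled by the PE-sign-specific learning rate
        q_history.append(q)
    return np.array(q_history)

# Example: a stream of mixed rewards (+1) and punishments (-1)
rng = np.random.default_rng(0)
outcomes = rng.choice([1.0, -1.0], size=100)

approach_biased  = simulate_learner(outcomes, alpha_pos=0.5, alpha_neg=0.1)
avoidance_biased = simulate_learner(outcomes, alpha_pos=0.1, alpha_neg=0.5)
print(approach_biased[-1], avoidance_biased[-1])  # approach-biased learner settles on a higher value
```

Because high-valued actions are approached and low-valued actions avoided, the asymmetry in learning rates directly shifts the balance between approach and avoidance behavior.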


Related articles

Differential magnitude coding of gains and omitted rewards in the ventral striatum.

Physiologic studies revealed that neurons in the dopaminergic midbrain of non-human primates encode reward prediction errors. It was furthermore shown that reward prediction errors are adaptively scaled with respect to the range of possible outcomes, enabling sensitive encoding for a large range of reward values. Congruently, neuroimaging studies in humans demonstrated that BOLD-responses in th...


Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum

Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in suppor...


A Tribute to Charlie Chaplin: Induced Positive Affect Improves Reward-Based Decision-Learning in Parkinson’s Disease

Reward-based decision-learning refers to the process of learning to select those actions that lead to rewards while avoiding actions that lead to punishments. This process, known to rely on dopaminergic activity in striatal brain regions, is compromised in Parkinson's disease (PD). We hypothesized that such decision-learning deficits are alleviated by induced positive affect, which is thought t...


Spontaneous eye blink rate predicts learning from negative, but not positive, outcomes.

A large body of research shows that striatal dopamine critically affects the extent to which we learn from the positive and negative outcomes of our decisions. In this study, we examined the relationship between reinforcement learning and spontaneous eye blink rate (sEBR), a cheap, non-invasive, and easy to obtain marker of striatal dopaminergic activity. Based on previous findings from pharmac...


Adaptive coding of reward prediction errors is gated by striatal coupling.

To efficiently represent all of the possible rewards in the world, dopaminergic midbrain neurons dynamically adapt their coding range to the momentarily available rewards. Specifically, these neurons increase their activity for an outcome that is better than expected and decrease it for an outcome worse than expected, independent of the absolute reward magnitude. Although this adaptive coding i...
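As a rough illustration of the range-adaptive prediction-error coding described in the entry above, the sketch below scales prediction errors by the range of momentarily available rewards, so the same coding range covers small- and large-reward contexts. The simple division-by-range scheme is an assumption for illustration, not the model used in the cited study.

```python
def adaptive_prediction_error(outcome, expected, reward_range):
    """Prediction error normalized by the context's reward range, so coding is
    sensitive to relative (not absolute) deviations from expectation.
    Illustrative assumption: plain division by the available reward range."""
    raw_pe = outcome - expected        # better (+) or worse (-) than expected
    return raw_pe / reward_range       # scaled PE, roughly bounded in [-1, 1]

# The same relative surprise yields the same scaled PE in small- and large-reward contexts
print(adaptive_prediction_error(outcome=2.0,  expected=1.0,  reward_range=2.0))   # 0.5
print(adaptive_prediction_error(outcome=20.0, expected=10.0, reward_range=20.0))  # 0.5
```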



Journal:
  • The Journal of neuroscience : the official journal of the Society for Neuroscience

Volume 35, Issue 43

Pages: –

Publication date: 2015